perm filename WEIZEN.1[PUB,JMC] blob
sn#207600 filedate 1976-03-19 generic text, type C, neo UTF8
19-MAR-76 1343 FTP:BEN at MIT-AI
Date: 19 MAR 1976 1638-EST
From: BEN at MIT-AI
To: jmc at SU-AI
John,
Thank you for your review, which I thought was very well done
and to the point. Several weeks ago, a few of us had a short seminar
with Joe Weizenbaum, after which I wrote up my thoughts about the
GOOD points he makes (often poorly) in his book. I am enclosing those
thoughts below.
-----------------
"There are more things in Heaven and Earth, Horatio,
than are dreamt of in your philosophy."
-- Hamlet, Act I, Scene 5.
Some thoughts on Joe Weizenbaum's book, "Computer Power and Human Reason."
This book seems to contain some important concerns which are obscured by
harsh and sometimes shrill accusations against the Artificial
Intelligence research community. Here I attempt to make those points in
a calmer way.
1. It is important for a scientist to realize that the descriptive methods
of his field capture only one aspect of the phenomena he studies.
2. A scientist should recognize the difference between descriptive and
prescriptive statements. Descriptive statements can be based on scientific
investigation; prescriptive statements are based on values. A belief that
value judgements are trivial can lead the unwise to believe that prescriptive
conclusions follow directly from descriptive data.
3. Point 1 notwithstanding, it is a matter of personal faith whether
there are aspects of the world which cannot be fully described by some
scientific (i.e. empirical) method. It is clear, of course, that many
important aspects of the world are beyond our current scientific methods.
4. Assuming that we find it possible to build an intelligent computer,
there will inevitably be an enormous cultural gulf between it and humans.
Social scientists can say a great deal about the amount of common culture
which is required between a professional and a client in roles such as
psychiatrist or judge. This would make the use of a computer in
one of these roles inappropriate as a matter of technical judgment, rather
than of moral judgment.
5. JW seems worried that hackers with no understanding of theory will
produce intelligent programs which therefore contribute nothing to our
understanding of Man. It seems exceedingly unlikely that the very difficult
problems of intelligence can be solved without a deep theory. The overwhelming
impression of workers in AI is that people are very good at solving problems
which are computationally very difficult. Familiarity breeds great respect,
not contempt.
6. It is important to clarify the nature of responsibility for actions of
computer programs. As with other machines, the computer is used today as
a tool of humans, who should bear the responsibility for their actions. The
difficulty is that computers have a much greater capacity for independent
and unanticipated action than other machines. However, until we know what
it would mean for a computer to take responsibility for its actions, it
must be emphatically clear that a human is responsible for any act of a
machine. This is a particularly important point for the general public.
7. It is a bad thing to encourage people to take a simplistic view of Man.
It is important for us in AI to emphasize that our work with computer models
of human intelligence increases our wonder and respect for human beings,
rather than reducing Man to a simple mechanism. The most important potential
misunderstanding of our work lies in ethics: one's ethical responsibilities
to a machine are apparently much less than to a human. It would be bad to
undercut the ethical status of Man by encouraging the belief that Man is
"only a machine."
Benjamin Kuipers
MIT AI Lab
[arpanet: BEN @ MIT-AI]
-------